Variational Bayes Estimation of Mixing Coefficients
Authors
Abstract
We investigate theoretically some properties of variational Bayes approximations based on estimating the mixing coefficients of known densities. We show that, with probability 1 as the sample size n grows large, the iterative algorithm for the variational Bayes approximation converges locally to the maximum likelihood estimator at the rate of O(1/n). Moreover, the variational posterior distribution for the parameters is shown to be asymptotically normal with the same mean but a different covariance matrix compared with those for the maximum likelihood estimator. Furthermore we prove that the covariance matrix from the variational Bayes approximation is ‘too small’ compared with that for the MLE, so that resulting interval estimates for the parameters will be unrealistically narrow.
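The iterative VB scheme the abstract refers to can be illustrated with a minimal sketch: for a mixture of known densities with a Dirichlet prior on the weights, the variational updates alternate between responsibilities (using E[log π_k] = ψ(α_k) − ψ(Σα)) and Dirichlet parameter updates, and the resulting posterior mean is compared with the MLE from plain EM. The two Gaussian components, the true weights, and the Dirichlet(1, 1) prior below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.special import digamma
from scipy.stats import norm

# Simulate from a two-component mixture with KNOWN component densities
# (illustrative choice: N(-2, 1) and N(2, 1) with weights 0.3 / 0.7).
rng = np.random.default_rng(0)
true_pi = np.array([0.3, 0.7])
n = 2000
z = rng.choice(2, size=n, p=true_pi)
x = np.where(z == 0, rng.normal(-2.0, 1.0, n), rng.normal(2.0, 1.0, n))

# F[i, k] = f_k(x_i): density of each known component at each data point
F = np.column_stack([norm(-2.0, 1.0).pdf(x), norm(2.0, 1.0).pdf(x)])

# --- Variational Bayes with a Dirichlet(1, 1) prior on the weights ---
alpha0 = np.ones(2)
alpha = alpha0.copy()
for _ in range(200):
    # responsibilities use E[log pi_k] = psi(alpha_k) - psi(sum alpha)
    log_r = np.log(F) + digamma(alpha) - digamma(alpha.sum())
    r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # update the Dirichlet parameters with the expected counts
    alpha = alpha0 + r.sum(axis=0)
pi_vb = alpha / alpha.sum()  # variational posterior mean of the weights

# --- Maximum likelihood via plain EM, for comparison ---
pi_mle = np.array([0.5, 0.5])
for _ in range(200):
    w = F * pi_mle
    w /= w.sum(axis=1, keepdims=True)
    pi_mle = w.mean(axis=0)

print(pi_vb, pi_mle)  # for large n the two estimates nearly coincide
```

Consistent with the paper's local-convergence result, the VB posterior mean and the MLE agree closely at this sample size; the abstract's further point is that the *spread* of the variational Dirichlet posterior understates the sampling variability of the MLE.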
Similar papers
Local convergence of variational Bayes estimators for mixing coefficients
In this paper we prove theoretically that for mixture models involving known component densities the variational Bayes estimator converges locally to the maximum likelihood estimator at the rate of O(1/n) in the large sample limit.
Variational Bridge Regression
Here we obtain approximate Bayes inferences through variational methods when an exponential power family type prior is specified for the regression coefficients to mimic the characteristics of the Bridge regression. We accomplish this through hierarchical modeling of such priors. Although the mixing distribution is not explicitly stated for scale normal mixtures, we obtain the required moments ...
Variational Bayes for generalized autoregressive models
We describe a variational Bayes (VB) learning algorithm for generalized autoregressive (GAR) models. The noise is modeled as a mixture of Gaussians rather than the usual single Gaussian. This allows different data points to be associated with different noise levels and effectively provides robust estimation of AR coefficients. The VB framework is used to prevent overfitting and provides model-o...
Inference and Parameter Estimation in Gamma Chains
We investigate a class of prior models, called Gamma chains, for modelling dependencies in time-frequency representations of signals. We assume transform coefficients are drawn independently from Gaussians where the latent variances are coupled using Markov chains of inverse Gamma random variables. Exact inference is not feasible but this model class is conditionally conjugate, so st...
Generalization Performance of Subspace Bayes Approach in Linear Neural Networks
In unidentifiable models, the Bayes estimation has the advantage of generalization performance over the maximum likelihood estimation. However, accurate approximation of the posterior distribution requires huge computational costs. In this paper, we consider an alternative approximation method, which we call a subspace Bayes approach. A subspace Bayes approach is an empirical Bayes approach whe...